Beyond Perturbations: Learning Guarantees with Arbitrary Adversarial Test Examples
We present a transductive learning algorithm that takes as input training examples from a distribution P and arbitrary (unlabeled) test examples, possibly chosen by an adversary. This is unlike prior work that assumes that test examples are small perturbations of P. Our algorithm outputs a selective classifier, which abstains from predicting on some examples. By considering selective transductive learning, we give the first nontrivial guarantees for learning classes of bounded VC dimension with arbitrary train and test distributions; no prior guarantees were known even for simple classes of functions such as intervals on the line. In particular, for any function in a class C of bounded VC dimension, we guarantee a low test error rate and a low rejection rate with respect to P. Our algorithm is efficient given an Empirical Risk Minimizer (ERM) for C. Our guarantees hold even for test examples chosen by an unbounded white-box adversary. We also give guarantees for generalization, agnostic, and unsupervised settings.
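To make the selective-classification idea concrete for the abstract's own example of intervals on the line, the sketch below abstains on any test point where hypotheses nearly as accurate as the empirical risk minimizer disagree. This is a minimal disagreement-region sketch under illustrative assumptions, not the paper's actual algorithm; the candidate set, the `eps` slack, and all function names are our own.

```python
def interval_hypothesis(a, b):
    """Hypothesis h_{a,b}(x) = 1 iff a <= x <= b (intervals on the line)."""
    return lambda x: 1 if a <= x <= b else 0

def empirical_error(h, sample):
    """Fraction of labeled examples (x, y) that h misclassifies."""
    return sum(h(x) != y for x, y in sample) / len(sample)

def selective_classify(train, test, eps=0.0):
    """Predict on test points where all near-ERM intervals agree; else abstain.

    train: list of (x, y) pairs with y in {0, 1}; test: list of x values.
    Returns a list with a 0/1 prediction or None (abstain) per test point.
    """
    # Candidate intervals with endpoints at training points (an ERM-style search).
    xs = sorted({x for x, _ in train})
    candidates = [interval_hypothesis(a, b) for a in xs for b in xs if a <= b]
    best = min(empirical_error(h, train) for h in candidates)
    # "Version space": hypotheses whose training error is within eps of ERM.
    good = [h for h in candidates if empirical_error(h, train) <= best + eps]
    out = []
    for x in test:
        preds = {h(x) for h in good}
        # Predict only when every near-ERM hypothesis agrees on x.
        out.append(preds.pop() if len(preds) == 1 else None)
    return out

train = [(1, 1), (2, 1), (3, 0)]  # positives at 1 and 2, negative at 3
print(selective_classify(train, [0, 1.5, 3], eps=0.4))  # [0, None, None]
```

On this toy data the classifier commits to a label at x=0, where every low-error interval predicts 0, but abstains at x=1.5 and x=3, where near-ERM intervals disagree; an adversary who places test points in such disagreement regions draws abstentions rather than errors.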
Review for NeurIPS paper: Beyond Perturbations: Learning Guarantees with Arbitrary Adversarial Test Examples
The paper considers learning a binary classifier in a setting where training and test examples can come from arbitrarily different distributions. The authors approach this problem by giving a selective classification algorithm, which returns a classifier together with a subset of examples on which the classifier abstains from assigning a label, while incurring few abstentions and few misclassification errors. The authors show that the proposed algorithm achieves optimal guarantees for classes with bounded VC dimension. This is an exciting and very timely contribution, given the growing interest in robust machine learning and the need to better understand transfer learning. The paper is written clearly, and the results and insights are compelling.